We present an analysis of the Locally Competitive Algorithm (LCA), a Hopfield-style neural network that efficiently solves sparse approximation problems (e.g., approximating a vector from a dictionary using just a few non-zero coefficients). This class of problems plays a significant role in both theories of neural coding and applications in signal processing. However, the LCA lacks analysis of its convergence properties, and previous results on neural networks for nonsmooth optimization do not apply to the specifics of the LCA architecture. We show that the LCA has desirable convergence properties, such as stability and global convergence to the optimum of the objective function when it is unique. Under some mild conditions, the support of the solution is also proven to be reached in finite time. Furthermore, some restrictions on the problem specifics allow us to characterize the convergence rate of the system by showing that the LCA converges exponentially fast with an analytically bounded convergence rate. We support our analysis with several illustrative simulations.
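To make the setting concrete, the following is a minimal sketch of the standard LCA dynamics with a soft-thresholding activation, applied to a small synthetic sparse approximation problem. The dictionary, sparsity level, and the parameters `lam` (threshold), `tau` (time constant), `dt`, and `steps` are all illustrative assumptions, not values from the analysis above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: random unit-norm dictionary Phi (m x n),
# target y generated from a k-sparse coefficient vector.
m, n, k = 20, 50, 3
Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, axis=0)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = Phi @ x_true

lam, tau, dt, steps = 0.1, 1.0, 0.01, 5000   # assumed parameters

def soft(v):
    """Soft-threshold activation a = T_lam(u)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

b = Phi.T @ y                 # constant feedforward drive
G = Phi.T @ Phi - np.eye(n)   # lateral inhibition between nodes

u = np.zeros(n)               # internal state of the network
for _ in range(steps):
    a = soft(u)                           # thresholded outputs
    u += (dt / tau) * (b - u - G @ a)     # Euler step of the LCA ODE

a = soft(u)
print("nonzero coefficients:", np.count_nonzero(a))
print("residual norm:", np.linalg.norm(y - Phi @ a))
```

Simulating the ODE by forward Euler is a common way to inspect the trajectory; the exponential convergence discussed above refers to the continuous-time system, so the step size `dt` must be small enough for the discretization to remain stable.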